Reconstructing a Large-Scale Attribute-Based Social Network
An epidemic occurs when a disease rapidly infects substantially more people than expected from past experience with similar diseases. If an epidemic is not contained, it can turn into a pandemic and cause a worldwide crisis. It is therefore critical to determine and implement epidemic policies that are promising and effective within a short period of time. In this paper, we develop tools that allow us to recreate large-scale real-world social networks. Using such networks enables us to simulate disease spread and to identify the critical personal and social factors that are key to containing or even preventing an epidemic. We begin by developing an attribute-based social network infrastructure with three objectives in mind: efficiency, modularity, and functionality. Next, real-world data from public sources are analyzed and imported into the infrastructure to reconstruct a real-world social network. The resulting social network is predicted to be an accurate representation of the data used to create it, since properties of the network match actual publicly available census data with a percent error of less than 0.03. The tools and methods developed in this paper allow simulation and analysis to be performed on real-world social networks, providing crucial information for determining effective epidemic policies within an extremely short period of time.
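As an illustration of the attribute-based construction described above, the following sketch (hypothetical names and census marginals, not the paper's actual infrastructure) samples node attributes from a census-style distribution, wires random contacts, and checks the attribute marginals against the target, mirroring the percent-error validation mentioned in the abstract:

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical census-style marginals (illustrative values, not real data).
AGE_DIST = {"child": 0.2, "adult": 0.6, "senior": 0.2}

def sample_attribute(dist):
    """Draw one category according to its census proportion."""
    r, acc = random.random(), 0.0
    for category, p in dist.items():
        acc += p
        if r < acc:
            return category
    return category  # guard against floating-point round-off

def build_network(n_people, mean_degree=4):
    """Attach census-sampled attributes to nodes, then wire random contacts."""
    people = {i: {"age": sample_attribute(AGE_DIST)} for i in range(n_people)}
    edges = set()
    while len(edges) < n_people * mean_degree // 2:
        a, b = random.sample(range(n_people), 2)
        edges.add((min(a, b), max(a, b)))
    return people, edges

def percent_error(people, dist):
    """Largest absolute gap between network and census age proportions."""
    counts = Counter(attrs["age"] for attrs in people.values())
    return max(abs(counts[c] / len(people) - p) for c, p in dist.items())

people, edges = build_network(10_000)
```

For a network of 10,000 nodes, the sampled marginals land within a few tenths of a percent of the targets, which is the kind of check the abstract's validation against census data performs at larger scale.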
Constructing Representative Social Networks for Disease Simulation
Disease spread mechanisms have been questioned and studied for many years. The ability to make predictions about an epidemic could enable scientists to evaluate inoculation and isolation plans, control the mortality rate, and alter the future course of an outbreak. Accurate predictions, however, are extremely hard to make and require interdisciplinary solutions drawing on epidemiology, sociology, statistics, graph theory, and computer science. Few network analysis platforms written in C++ are currently designed and optimized for making such predictions. This research aims to address that gap by exploring methods of constructing social contact networks, simulating disease spread, and proposing mitigation strategies for use by public health officials. We present a network construction and simulation library that allows studying the progress of an epidemic in a large-scale social contact network. The library easily manipulates large graphs, generates regular and random graphs, and supports various compartmental models in epidemiology. It allows construction of a population network from publicly available census information for the geographical area under consideration. Given the characteristics of the disease, the library can simulate single or multiple outbreaks over the network. Standard outputs are the evolution of the prevalence of the disease and different possible mitigation strategies under a variety of constraints. The network analysis platform serves as a handy tool designed to help us understand the paths followed by outbreaks in a given community and to generate strategies for preventing and controlling them.
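The kind of compartmental simulation such a library supports can be sketched as a discrete-time SIR process over a contact network. This is a minimal stdlib-Python illustration with assumed parameter names, not the C++ library's API:

```python
import random

random.seed(1)

def simulate_sir(n, edges, beta, gamma, seed_node=0, max_steps=100):
    """Discrete-time SIR over a contact network: each step, infectious nodes
    transmit along edges with probability beta, then recover with probability gamma."""
    neighbors = {i: [] for i in range(n)}
    for a, b in edges:
        neighbors[a].append(b)
        neighbors[b].append(a)
    state = {i: "S" for i in range(n)}
    state[seed_node] = "I"
    prevalence = []  # number of infectious nodes after each step
    for _ in range(max_steps):
        infectious = [i for i, s in state.items() if s == "I"]
        if not infectious:
            break
        for i in infectious:
            for j in neighbors[i]:
                if state[j] == "S" and random.random() < beta:
                    state[j] = "I"
            if random.random() < gamma:
                state[i] = "R"
        prevalence.append(sum(s == "I" for s in state.values()))
    return prevalence, state

# Toy contact network: a ring of 200 people plus a few random shortcuts.
n = 200
edges = [(i, (i + 1) % n) for i in range(n)]
edges += [tuple(random.sample(range(n), 2)) for _ in range(20)]
prevalence, final = simulate_sir(n, edges, beta=0.5, gamma=0.2)
```

The `prevalence` curve is the "evolution of the prevalence of the disease" the abstract lists as a standard output; mitigation strategies can then be compared by rerunning with modified edges or parameters.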
Symmetry Induction in Computational Intelligence
Symmetry has been a very useful tool to researchers in various scientific fields. At its most basic,
symmetry refers to the invariance of an object to some transformation, or set of transformations.
Usually one searches for, and uses information concerning an existing symmetry within given data,
structure or concept to somehow improve algorithm performance or compress the search space.
This thesis examines the effects of imposing or inducing symmetry on a search space. That is, the
question being asked is whether only existing symmetries can be useful, or whether shifting to an
intuition-based definition of symmetry over the evaluation function can also be of use. Within the
context of optimization, symmetry induction as defined in this thesis has the effect of equating
the evaluation of a set of given objects.
Group theory is employed to explore possible symmetrical structures inherent in a search space.
Additionally, conditions when the search space can have a symmetry induced on it are examined. The
idea of a neighborhood structure then leads to the idea of opposition-based computing which aims
to induce a symmetry of the evaluation function. In this context, the search space can be seen as
having a symmetry imposed on it. To be useful, it is shown that an opposite map must be defined
such that it equates elements of the search space which have a relatively large difference in their
respective evaluations. Using this idea a general framework for employing opposition-based ideas
is proposed. To show the efficacy of these ideas, the framework is applied to popular computational
intelligence algorithms within the areas of Monte Carlo optimization, estimation of distribution and
neural network learning.
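The interval-based opposite map commonly used in opposition-based computing, and the sense in which inducing a symmetry "equates" evaluations, can be sketched as follows (illustrative definitions, not the thesis's exact formalism):

```python
def opposite(x, a, b):
    """Interval-based opposite point: reflect x through the midpoint of [a, b]."""
    return a + b - x

def induced_eval(f, x, a, b):
    """Induce a symmetry on f by equating each point with its opposite:
    both x and its opposite receive the better (lower) of the two values."""
    return min(f(x), f(opposite(x, a, b)))

f = lambda x: (x - 4.0) ** 2   # minimum at x = 4 on [0, 5]
# Evaluating at x = 1 also "sees" its opposite 0 + 5 - 1 = 4:
# induced_eval(f, 1.0, 0.0, 5.0) = min(f(1), f(4)) = min(9, 0) = 0
```

A useful opposite map, in the thesis's terms, is one that tends to pair points whose evaluations differ greatly, so that the induced symmetry discards the worse of each pair.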
The first example application focuses on simulated annealing, a popular Monte Carlo optimization
algorithm. At a given iteration, symmetry is induced on the system by considering opposite
neighbors. Using this technique, a temporary symmetry over the neighborhood region is induced.
This simple algorithm is benchmarked on common real-valued optimization problems and compared against
traditional simulated annealing as well as a randomized version. The results highlight improvements
in accuracy, reliability and convergence rate. An application to image thresholding further
confirms the results.
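A minimal sketch of the opposition-based simulated annealing idea described above (assumed step sizes and cooling schedule, not the benchmarked implementation): at each iteration the candidate neighbor and its interval opposite are both evaluated, and the better of the two enters the usual Metropolis acceptance test.

```python
import math
import random

random.seed(2)

def opposite(x, lo, hi):
    """Reflect x through the midpoint of [lo, hi]."""
    return lo + hi - x

def opposition_sa(f, lo, hi, steps=2000, t0=1.0, cooling=0.995):
    """Simulated annealing that also evaluates the opposite of each candidate
    neighbor, inducing a temporary symmetry over the neighborhood."""
    x = random.uniform(lo, hi)
    fx = f(x)
    t = t0
    for _ in range(steps):
        cand = max(lo, min(x + random.gauss(0, 0.5), hi))
        # Consider the candidate and its opposite; keep the better one.
        opp = opposite(cand, lo, hi)
        cand, fc = min(((cand, f(cand)), (opp, f(opp))), key=lambda p: p[1])
        # Standard Metropolis acceptance on the surviving candidate.
        if fc < fx or random.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
        t *= cooling
    return x, fx

best_x, best_f = opposition_sa(lambda x: (x - 3.0) ** 2, lo=-10.0, hi=10.0)
```

On this toy quadratic the opposite evaluation costs one extra function call per step but lets the search jump across the interval early on, which is the mechanism behind the reported convergence-rate gains.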
Another example application, population-based incremental learning, is rooted in estimation of
distribution algorithms. A major problem with these techniques is a rapid loss of diversity within
the samples after a relatively low number of iterations. The opposite sample is introduced as a
remedy to this problem. After proving an increased diversity, a new probability update procedure is
designed. This opposition-based version of the algorithm is benchmarked on common binary
optimization problems that exhibit the deceptiveness and attractive basins characteristic of
difficult real-world problems. Experiments reveal improvements in diversity,
accuracy, reliability and convergence rate over the traditional approach. Ten instances of the
traveling salesman problem and six image thresholding problems are used to further highlight the
improvements.
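The opposite-sample remedy can be sketched on a toy PBIL loop (illustrative parameters; the thesis's probability-update procedure is more refined than the plain winner-based update used here). Each sampled individual is paired with its bitwise opposite, which keeps both halves of the search space represented even as the probability vector converges:

```python
import random

random.seed(3)

def onemax(bits):
    """Toy objective: number of ones (to be maximized)."""
    return sum(bits)

def sample(p):
    """Draw one binary individual from the probability vector p."""
    return [1 if random.random() < pi else 0 for pi in p]

def opposition_pbil(f, n_bits=20, pop=20, rate=0.1, iters=200):
    """PBIL where each sampled individual is paired with its bitwise opposite,
    a remedy for the rapid loss of sampling diversity."""
    p = [0.5] * n_bits
    best, best_fit = None, float("-inf")
    for _ in range(iters):
        candidates = []
        for _ in range(pop):
            x = sample(p)
            candidates.append(x)
            candidates.append([1 - b for b in x])  # opposite sample
        winner = max(candidates, key=f)
        if f(winner) > best_fit:
            best, best_fit = winner, f(winner)
        # Shift the probability vector toward the winning individual.
        p = [(1 - rate) * pi + rate * b for pi, b in zip(p, winner)]
    return best, best_fit

best, fit = opposition_pbil(onemax)
```

Because an individual and its opposite are maximally distant in Hamming space, at least one of each pair remains informative even when `p` has nearly converged, which is the diversity effect the abstract refers to.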
Finally, gradient-based learning for feedforward neural networks is improved using opposition-based
ideas. The opposite transfer function is presented as a simple adaptive neuron which allows for
efficient jumps in weight space. It is shown that each possible opposite network represents
a unique input-output mapping, each having an associated effect on the numerical conditioning of
the network. Experiments confirm the potential of opposite networks during pre- and early training
stages. A heuristic for efficiently selecting one opposite network per epoch is presented.
Benchmarking focuses on common classification problems and reveals improvements in accuracy,
reliability, convergence rate and generalization ability over common backpropagation variants. To
further show the potential, the heuristic is applied to resilient propagation, where similar
improvements are also found.
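The weight-space jump enabled by an opposite transfer function can be illustrated directly: using the mirrored activation at a hidden unit is equivalent to negating that unit's incoming weights, a structured, zero-cost jump in weight space. This sketch (illustrative, for a sigmoid unit; not the thesis's full adaptive-neuron scheme) verifies the equivalence:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def opposite_sigmoid(x):
    """Opposite transfer function: the activation mirrored about x = 0."""
    return sigmoid(-x)

def hidden_output(weights, inputs, transfer):
    """Output of a single hidden unit."""
    s = sum(w * v for w, v in zip(weights, inputs))
    return transfer(s)

w = [0.7, -1.2, 0.4]
x = [0.5, 0.3, -0.8]
# Swapping in the opposite transfer function is equivalent to negating the
# unit's incoming weights, i.e. a jump to a distant point in weight space.
a = hidden_output(w, x, opposite_sigmoid)
b = hidden_output([-wi for wi in w], x, sigmoid)
```

Each choice of which hidden units to mirror yields a distinct input-output mapping, which is why selecting one opposite network per epoch, as the heuristic above does, amounts to sampling distant weight configurations at negligible cost.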
Stochastic Multiple Gradient Descent for Inferring Action-Based Network Generators
Networked systems, such as the internet and social networks, have in recent years attracted the attention of researchers, specifically to develop models that can help us understand or predict the behavior of these systems. One way of achieving this is through network generators: algorithms that can synthesize networks with statistically similar properties to a given target network. Action-based Network Generators (ABNG) is one such algorithm; it defines actions as strategies for nodes to form connections with other nodes, hence generating networks. ABNG is parametrized by an action matrix that assigns to each vertex an empirical probability distribution over actions. For a given target network, ABNG formulates the estimation of an action matrix as a multi-objective optimization problem, which in turn requires an algorithm to determine a Pareto set of action matrices that can generate networks statistically similar to the target. We propose using a population-based stochastic multiple gradient descent algorithm to estimate this Pareto set. Results showing the properties of networks optimized using the gradient-based algorithm are presented, and a comparison is performed with the previous approach used for optimization.
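For two objectives, the multiple gradient descent step has a closed form: the minimum-norm convex combination of the two gradients descends both objectives simultaneously whenever such a direction exists. The sketch below uses assumed toy objectives and noise, not the ABNG objective functions:

```python
import random

random.seed(4)

def common_descent_direction(g1, g2):
    """Two-objective MGDA step: the minimum-norm point of the segment
    between g1 and g2, i.e. argmin over alpha in [0, 1] of
    ||alpha*g1 + (1 - alpha)*g2||."""
    diff = [a - b for a, b in zip(g1, g2)]
    denom = sum(d * d for d in diff)
    if denom == 0.0:
        alpha = 0.5
    else:
        # Closed-form minimizer, clipped to the valid interval.
        alpha = sum((b - a) * b for a, b in zip(g1, g2)) / denom
        alpha = min(1.0, max(0.0, alpha))
    return [alpha * a + (1 - alpha) * b for a, b in zip(g1, g2)]

def noisy_grad(x, target, sigma=0.1):
    """Stochastic gradient of f(x) = sum_i (x_i - target)^2."""
    return [2 * (xi - target) + random.gauss(0, sigma) for xi in x]

# Two conflicting objectives pulling each coordinate toward +1 and -1;
# the Pareto set is the segment of constant vectors in between.
x = [5.0, -3.0]
for _ in range(500):
    d = common_descent_direction(noisy_grad(x, 1.0), noisy_grad(x, -1.0))
    x = [xi - 0.05 * di for xi, di in zip(x, d)]
```

Running many such descents from different starting points, as a population-based variant does, yields a spread of solutions approximating the Pareto set rather than a single compromise point.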
A morphospace of functional configuration to assess configural breadth based on brain functional networks
The best approach to quantify human brain functional reconfigurations in
response to varying cognitive demands remains an unresolved topic in network
neuroscience. We propose that such functional reconfigurations may be
categorized into three different types: i) Network Configural Breadth, ii)
Task-to-Task transitional reconfiguration, and iii) Within-Task
reconfiguration. In order to quantify these reconfigurations, we propose a
mesoscopic framework focused on functional networks (FNs) or communities. To do
so, we introduce a 2D network morphospace that relies on two novel mesoscopic
metrics, Trapping Efficiency (TE) and Exit Entropy (EE), which capture topology
and integration of information within and between a reference set of FNs. In
this study, we use this framework to quantify the Network Configural Breadth
across different tasks. We show that the metrics defining this morphospace can
differentiate FNs, cognitive tasks and subjects. We also show that network
configural breadth significantly predicts behavioral measures, such as episodic
memory, verbal episodic memory, fluid intelligence and general intelligence. In
essence, we put forth a framework to explore the cognitive space in a
comprehensive manner, for each individual separately, and at different levels
of granularity. This tool can also quantify the FN reconfigurations that result from the brain
switching between mental states.
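To give a concrete, and deliberately simplified, flavor of such mesoscopic metrics, the sketch below computes one-step random-walk proxies for how well a community "traps" flow and how dispersed its exits are. These are illustrative proxies only, not the paper's definitions of Trapping Efficiency and Exit Entropy:

```python
import math

def community_walk_stats(adj, community):
    """For a node set ('community') in a weighted graph given as an adjacency
    matrix, compute:
    - a trapping-efficiency-like score: probability that a one-step random
      walk starting inside the community stays inside it;
    - an exit-entropy-like score: Shannon entropy of where leaving walkers go.
    Illustrative one-step proxies, not the paper's TE/EE definitions."""
    inside = set(community)
    stay, total, exits = 0.0, 0.0, {}
    for i in inside:
        row_sum = sum(adj[i])
        if row_sum == 0:
            continue  # isolated node contributes no walk steps
        for j, w in enumerate(adj[i]):
            if w <= 0 or j == i:
                continue
            p = w / row_sum
            total += p
            if j in inside:
                stay += p
            else:
                exits[j] = exits.get(j, 0.0) + p
    te = stay / total if total else 0.0
    leave = sum(exits.values())
    ee = -sum((p / leave) * math.log(p / leave) for p in exits.values()) if leave else 0.0
    return te, ee

adj = [[0, 1, 1, 0],
       [1, 0, 0, 0],
       [1, 0, 0, 1],
       [0, 0, 1, 0]]
te, ee = community_walk_stats(adj, community=[0, 1])
```

A community with high trapping and a single, concentrated exit sits in a different region of such a 2D morphospace than one with leaky, widely dispersed exits, which is the kind of separation the paper exploits to differentiate FNs, tasks, and subjects.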
Geodesic distance on optimally regularized functional connectomes uncovers individual fingerprints
Background: Functional connectomes (FCs) have been shown to provide a
reproducible individual fingerprint, which has opened the possibility of
personalized medicine for neuro/psychiatric disorders. Thus, developing
accurate ways to compare FCs is essential to establish associations with
behavior and/or cognition at the individual-level.
Methods: Canonically, FCs are compared using Pearson's correlation
coefficient of the entire functional connectivity profiles. Recently, it has
been proposed that the use of geodesic distance is a more accurate way of
comparing functional connectomes, one which reflects the underlying
non-Euclidean geometry of the data. Computing geodesic distance requires FCs to
be positive-definite and hence invertible matrices. As this requirement depends
on the fMRI scanning length and the parcellation used, it is not always
attainable and sometimes a regularization procedure is required.
Results: In the present work, we show that regularization is not only an
algebraic operation for making FCs invertible, but also that an optimal
magnitude of regularization leads to systematically higher fingerprints. We
also show evidence that optimal regularization is dataset-dependent, and varies
as a function of condition, parcellation, scanning length, and the number of
frames used to compute the FCs.
Discussion: We demonstrate that a universally fixed regularization does not
fully uncover the potential of geodesic distance on individual fingerprinting,
and indeed could severely diminish it. Thus, an optimal regularization must be
estimated on each dataset to uncover the most differentiable across-subject and
reproducible within-subject geodesic distances between FCs. The resulting
pairwise geodesic distances at the optimal regularization level constitute a
very reliable quantification of differences between subjects.
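For reference, the geodesic distance in question is typically the affine-invariant Riemannian distance on symmetric positive-definite matrices. With a regularization of the form $\tilde{Q} = Q + \tau I$ (one common choice; the paper's exact scheme may differ), it reads:

```latex
% Affine-invariant geodesic distance between regularized FCs,
% with \tilde{Q}_i = Q_i + \tau I (assumed regularization form)
d_g(\tilde{Q}_1, \tilde{Q}_2)
  = \left\| \log\!\left( \tilde{Q}_1^{-1/2}\, \tilde{Q}_2\, \tilde{Q}_1^{-1/2} \right) \right\|_F
  = \sqrt{\sum_i \log^2 \lambda_i\!\left( \tilde{Q}_1^{-1} \tilde{Q}_2 \right)}
```

where the $\lambda_i$ are the (positive) generalized eigenvalues. Since $\tau$ shifts every eigenvalue, it directly shapes all pairwise distances, which is why tuning it per dataset, as the abstract argues, matters for fingerprinting.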
Modeling the spread of the Zika virus by sexual and mosquito transmission.
Zika virus (ZIKV) is a flavivirus that is transmitted predominantly by the Aedes species of mosquito, but also through sexual contact, blood transfusions, and congenitally from mother to child. Although approximately 80% of ZIKV infections are asymptomatic and typical symptoms are mild, multiple studies have demonstrated a causal link between ZIKV and severe conditions such as microcephaly and Guillain-Barré syndrome. The two goals of this study are to improve ZIKV models by considering the spread dynamics of ZIKV as both a vector-borne and sexually transmitted disease, and to approximate the degree of under-reporting. To accomplish these objectives, we propose a compartmental model that allows for the analysis of spread dynamics as both a vector-borne and sexually transmitted disease, and fit it to the ZIKV incidence reported to the National System of Public Health Surveillance in 27 municipalities of Colombia between January 1, 2015 and December 31, 2017. We demonstrate that our model can represent the infection patterns over this time period with high confidence. In addition, we argue that the degree of under-reporting is also well estimated. Using the model, we assess the potential viability of public health scenarios for mitigating disease spread and find that targeting the sexual pathway alone has negligible impact on overall spread, but that it may become important if the proportion of risky sexual behavior increases. Targeting mosquitoes remains the best approach of those considered. These results may be useful for public health organizations and governments in constructing and implementing suitable health policies and reducing the impact of Zika outbreaks.
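A minimal compartmental sketch combining the two transmission pathways (hypothetical rates and forward-Euler integration; not the fitted Colombia model) looks like this, with SIR dynamics for humans and SI dynamics for the vector population:

```python
def zika_step(state, params, dt=0.1):
    """One Euler step of a minimal SIR-human / SI-vector model with an added
    sexual-transmission term (hypothetical rates, not fitted values)."""
    Sh, Ih, Rh, Sv, Iv = state
    Nh = Sh + Ih + Rh
    new_vector = params["beta_v"] * Sh * Iv / Nh   # mosquito-to-human
    new_sexual = params["beta_s"] * Sh * Ih / Nh   # human-to-human (sexual)
    recover = params["gamma"] * Ih
    vector_inf = params["beta_h"] * Sv * Ih / Nh   # human-to-mosquito
    return (Sh - dt * (new_vector + new_sexual),
            Ih + dt * (new_vector + new_sexual - recover),
            Rh + dt * recover,
            Sv - dt * vector_inf,
            Iv + dt * vector_inf)

# Illustrative parameters and initial conditions (not estimated from data).
params = {"beta_v": 0.4, "beta_s": 0.05, "beta_h": 0.3, "gamma": 0.2}
state = (9999.0, 1.0, 0.0, 5000.0, 10.0)
for _ in range(2000):   # 200 simulated days at dt = 0.1
    state = zika_step(state, params)
```

Because `beta_s` enters only one of the two infection terms, scenarios that suppress the sexual pathway alone can be compared against vector-control scenarios by rescaling the corresponding rate, which is the style of policy comparison the abstract describes.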